Here's something you've felt in your bones but probably haven't said out loud: improving your codebase makes it temporarily worse.
Not worse in the "we shipped bugs" sense — though that happens too. Worse in the cognitive sense. The moment you start doing something a better way, you now have two ways in your codebase. The old way and the new way. And until the migration is complete, everyone needs to hold both in their head, know which is which, and understand that the inconsistency is intentional.
This tension between consistency and improvement has no clean resolution. Both matter. Both pull in opposite directions. And the space between them is where most of the unglamorous, invisible work of software engineering actually happens.
Consistency Is Cognitive Infrastructure
When I say consistency matters, I'm not talking about code being pretty. I'm talking about your brain's ability to function.
When naming conventions are predictable, patterns repeat, and similar problems get solved similarly, developers can build a mental model of the codebase and reuse it. You understand one service, you understand them all. You read one repository method, you can predict how the others work. The codebase becomes learnable by pattern, not piece by piece.
Think of it as cognitive infrastructure. The more homogeneous a codebase is, the fewer distinct concepts someone needs to juggle in working memory. And working memory, as cognitive science keeps reminding us, is painfully limited. Consistency is how you cheat those limits.
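Here's a minimal sketch of what that pattern reuse looks like in practice. It's Python with hypothetical names, but the shape is what matters: once you've read the first repository, the second one costs you almost nothing.

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class User:
    id: int
    email: str


@dataclass
class Order:
    id: int
    total_cents: int


class UserRepository:
    """Same shape as every other repository: get() returns the entity or None."""

    def __init__(self) -> None:
        self._store: Dict[int, User] = {}

    def get(self, user_id: int) -> Optional[User]:
        return self._store.get(user_id)

    def add(self, user: User) -> None:
        self._store[user.id] = user


class OrderRepository:
    """Identical structure; once you've read UserRepository, nothing here surprises you."""

    def __init__(self) -> None:
        self._store: Dict[int, Order] = {}

    def get(self, order_id: int) -> Optional[Order]:
        return self._store.get(order_id)

    def add(self, order: Order) -> None:
        self._store[order.id] = order
```

Nothing clever is happening there, and that's the point. The value lives in the repetition, not in any individual class.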
The worst codebase isn't one that does things badly. It's one that does things differently everywhere. Because then every file is a fresh puzzle, and your brain never gets to settle into a groove.
But Code Lives in Time
Here's the other side: codebases aren't museum pieces. They evolve.
Different people contribute over months and years, each with their own instincts and expertise. Libraries release new versions that change best practices. Requirements shift. What was reasonable architecture for three endpoints starts groaning under thirty. Someone discovers that the "clean" pattern everyone's been using is actually a well-known antipattern.
At some point, somebody spots a better way. Maybe the repository implementation has been causing friction everywhere. Maybe the whole system needs to go async. Maybe a naming convention has been confusing people for months.
The impulse to improve is healthy — it's what separates living code from fossil code. The question isn't whether to improve. It's how to do it without destroying the consistency that makes the code workable.
Your Three Options (And Their Prices)
When you identify something that could be better, you've got three paths. None is free.
Option one: Do nothing. Swallow the imperfection and keep shipping. Some people handle this fine. Others lose sleep over it. Either way, the "debt" accumulates — not in the dramatic "system collapse" sense, but in that slow erosion of confidence that comes from working in code you know could be better.
Option two: Change everything at once. The Big Bang Refactoring. Rewrite all the repositories, migrate the entire codebase to async, rename everything in one glorious pull request. This is universally discouraged for good reasons: large changesets are nightmares to review, impossible to test comprehensively, and introduce the worst kind of bugs — the subtle ones that slip through because your tests were written against the old assumptions.
Option three: Change incrementally. Do it in pieces. New features use the new approach. Existing code migrates gradually. This is the recommended practice, and it genuinely works better than the alternatives. But it has a cost that nobody talks about: for potentially months, your codebase is actively less consistent than before you started improving it.
You're trading temporary chaos for long-term improvement. Which is fine — as long as you actually finish.
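To make the "temporary chaos" concrete, here's a hypothetical snapshot of option three mid-migration, assuming a Python codebase moving its data access from synchronous to async, the kind of shift mentioned earlier. The class names are invented for illustration, not taken from any real system.

```python
import warnings
from typing import Dict, Optional


class UserRepository:
    """Old pattern: synchronous access. Unmigrated call sites still use this."""

    def __init__(self) -> None:
        self._store: Dict[int, str] = {}

    def get_email(self, user_id: int) -> Optional[str]:
        # Flag the old path so nobody builds new features on it by accident.
        warnings.warn(
            "UserRepository is deprecated; new code should use AsyncUserRepository",
            DeprecationWarning,
            stacklevel=2,
        )
        return self._store.get(user_id)


class AsyncUserRepository:
    """New pattern: async access. All new features start here."""

    def __init__(self) -> None:
        self._store: Dict[int, str] = {}

    async def get_email(self, user_id: int) -> Optional[str]:
        return self._store.get(user_id)
```

The deprecation warning is one way to answer the "which one is the new way?" question in the code itself, instead of relying on someone's memory of a decision made months ago.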
The AI Acceleration Problem
AI coding assistants have made this whole dynamic faster and messier.
When a coding assistant can generate a new implementation approach in thirty seconds, the temptation to experiment grows. You can try three different architectural patterns in the time it used to take to implement one. That's amazing for exploration — until you realize your codebase now has three different ways to handle the same problem, and nobody remembers which one you decided to standardize on.
The speed advantage cuts both ways. Yes, you can refactor faster. But you can also create inconsistencies faster. Teams that used to carefully plan migrations before starting them now find themselves knee-deep in half-finished refactors because it was so easy to begin.
Here's the weird thing, though: AI is also surprisingly good at enforcing consistency once you've decided on a pattern. Tell it "convert this to match the established pattern" and it'll often get it right. The challenge isn't execution — it's deciding which pattern deserves to be the standard.
The Graveyard of Abandoned Migrations
Finishing migrations is where good intentions go to die.
Migrations compete for priority with features that have visible business value. Features show up in stakeholder demos and revenue dashboards. Migrations deliver "better code" and "less friction" — outcomes that live in the invisible engineering world. Guess which one wins the sprint planning meeting?
So migrations stall. You get to 60% completion, something urgent comes along, and the migration gets deprioritized. The person who championed it switches teams or forgets about it. Now you have two coexisting approaches with no clear winner.
This sends a devastating message to anyone reading the code: there are no standards here, so anything goes. If the codebase already has two ways to do the same thing, why not add a third? This is how you end up with what I call Frankenstein codebases — systems where every corner follows different rules and the only consistent thing is the inconsistency.
I've seen this pattern in codebases across different languages, different domains, different companies. They all have the same flavor: that overwhelming sense that each file was written by someone who either didn't know or didn't care what the neighboring files were doing. Entropy has a recognizable shape.
Standards Proliferation at Scale
This plays out at the organizational level too. There's an xkcd that nails it:
> "Situation: there are 14 competing standards."
> "We need to develop one universal standard that covers everyone's use cases."
> "Situation: there are 15 competing standards."
I watched this happen with AI tool integration protocols. Different teams, solving similar problems, each built their own approach. One used OAuth, another used API keys, a third built something custom. All reasonable choices in isolation. But when someone tried to create a unified standard so AI agents could reach all the tools, the "unification" effort just became approach number N+1.
The root cause is structural. Early exploration needs freedom — you can't standardize what you don't understand yet. But by the time standardization becomes necessary, you already have a garden of incompatible approaches. The cost of unifying them is huge, and nobody budgeted for it.
Even Typos Are Trade-offs
This tension shows up at the smallest scale too.
You're building a feature and notice a typo: `calcualte_total`. Takes five seconds to fix. Your IDE can rename across the whole codebase automatically.
Do you fix it?
If you do, your PR now has mixed concerns — the feature plus an unrelated rename. The reviewer sees files changing for no apparent reason. It's minor, but it muddles the intent.
Better practice: finish your feature, submit that PR, then immediately create a second PR with just the typo fix. A review of "renamed `calcualte_total` to `calculate_total`" is trivially easy to approve. The concerns are separated.
This sounds pedantic. It's not. It's the same principle scaled down: keep changes atomic and single-purpose. And don't create a backlog ticket for the typo — that's where good intentions go to die. You know the fix, you're already in context. Two minutes now beats an eternity in Jira.
The Invisible Value Trap
Neither consistency nor improvement has obvious business value.
Consistency makes code "easier to understand" — but understanding doesn't appear on dashboards. Improvement reduces "future friction" — but you can't measure that against the counterfactual where you didn't improve it. Both live in the invisible engineering world that keeps the visible world running.
Engineers exist in two realities: the business world of features and deadlines, and the engineering world of quality and sustainability. Guess which one gets prioritized when push comes to shove?
Advocating for refactoring over features is almost always an uphill battle — unless the situation has deteriorated so badly that team morale is collapsing. By then you're not improving proactively, you're triaging an emergency.
No Sweet Spot, Only Judgment Calls
I've tried to find the perfect balance and keep landing on "it depends," which is the most honest and least satisfying answer in software.
Kent Beck once observed that expensive decisions, like financial investments, benefit from delay because money loses value over time. Maybe that migration costing 100 hours today will be cheaper next quarter in real terms.
But code isn't currency. The longer you defer a migration, the more code gets written against the old pattern, making the eventual migration more expensive. Technical debt compounds, and unlike financial debt, the interest rate increases over time.
Plus there's a cost that doesn't show up in any economic model: morale. Nobody enjoys working in a messy codebase. The abandoned migrations, the inconsistent patterns, the "just ignore that part" comments — they erode craftsmanship. Once that erosion starts, it's self-reinforcing. Why do things the "right" way when the codebase clearly doesn't care?
Living With the Tension
We don't resolve this tension. We manage it.
The key is making deliberate choices. Knowing you're trading temporary inconsistency for long-term improvement is strategy. Accidentally drifting into chaos because nobody decided anything? That's just entropy winning by default.
Finish what you start. Keep your changes atomic. Communicate your migrations clearly. Accept that things will be worse before they get better. And maybe most importantly: when AI offers you the ability to try five different approaches quickly, resist the temptation unless you're genuinely committed to picking one and seeing it through.
The tension between consistency and improvement isn't a problem to solve. It's a dynamic to navigate. And in our new world of AI-accelerated development, learning to navigate it skillfully might be more important than ever.